
    A Regulatory Theory of Cortical Organization and its Applications to Robotics

    No full text
    Fundamental aspects of biologically inspired regulatory mechanisms are considered in a robotics context, using artificial neural-network control systems. In organisms, regulatory mechanisms control gene expression and the adaptation of form and behavior. Traditional neural-network control architectures assume a fixed network of neurons interconnected by wires. These architectures tend to be specified by a designer and face several limitations that reduce scalability and tractability for tasks with larger search spaces. The traditional remedy for these limitations with fixed network topologies is to provide more supervision by a designer. As shown, however, more supervision does not guarantee improvement during training, particularly when incorrect assumptions are made about little-known task domains. Biological organisms often require no such external intervention and have instead self-organized through adaptation. The artificial neural tissue (ANT) framework addresses the limitations of current neural-network architectures by modeling both wired interactions between neurons and wireless interactions through chemical diffusion fields. An evolutionary (Darwinian) selection process is used to 'breed' ANT controllers for the task at hand, and the framework facilitates the emergence of creative solutions since only a system goal function and a generic set of basis behaviours need be defined. Regulatory mechanisms form dynamically within ANT through the superposition of chemical diffusion fields from multiple sources and are used to select neuronal groups. Regulation drives competition and cooperation among neuronal groups, and areas of specialization form within the tissue as a result. These regulatory mechanisms are also shown to increase tractability without requiring more supervision, using a new statistical theory developed to predict the performance characteristics of fixed network topologies. Simulations also confirm the significance of regulatory mechanisms in solving certain tasks found intractable for fixed network topologies. The framework also shows general improvement in training performance over existing fixed-topology neural-network controllers for several robotic and control tasks. ANT controllers evolved in a low-fidelity simulation environment have been demonstrated on hardware for a number of tasks using groups of mobile robots and have given insight into self-organizing systems. Evidence of sparse activity and of decentralized, distributed functionality within ANT controller solutions is consistent with observations from neurobiology.
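The regulation mechanism described above can be sketched in a few lines: diffusion fields from several chemical sources superpose, and a neuron joins the active group only when the summed concentration at its position crosses a threshold. The exponential field shape, threshold, and coordinates below are assumptions for illustration, not the thesis's actual model.

```python
import math

# Hypothetical sketch of ANT-style regulation: chemical diffusion fields
# from multiple sources superpose, selecting a neuronal group by threshold.
# Field shape (exponential decay), threshold, and positions are illustrative.

def field(source, pos, decay=1.0):
    """Concentration contributed by one diffusion source at a position."""
    return math.exp(-decay * math.dist(source, pos))

def active_neurons(sources, neurons, threshold=1.0):
    """Select the neuronal group whose summed (superposed) field exceeds the threshold."""
    selected = []
    for pos in neurons:
        concentration = sum(field(s, pos) for s in sources)
        if concentration >= threshold:
            selected.append(pos)
    return selected

sources = [(0.0, 0.0), (2.0, 0.0)]
neurons = [(0.0, 0.0), (1.0, 0.0), (4.0, 4.0)]
print(active_neurons(sources, neurons))  # only (0.0, 0.0) is selected
```

Because selection depends on the superposed field rather than on any single source, moving or adding a source re-selects a different neuronal group, which is what lets specialized areas form and compete within the tissue.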

    Coevolving Communication and Cooperation for Lattice formation Tasks

    No full text
    Abstract. Reactive multi-agent systems are shown to coevolve with explicit communication and cooperative behavior to solve lattice formation tasks. Comparable agents that lack the ability to communicate and cooperate are shown to be unsuccessful in solving the same tasks. The control system for these agents consists of identical cellular automata lookup tables handling communication, cooperation and motion subsystems.
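A reactive agent of the kind described above can be sketched as a shared lookup table mapping a sensed neighborhood directly to an action. The state encoding and table entries below are invented for illustration; the paper's agents use separate tables for the communication, cooperation, and motion subsystems.

```python
# Minimal sketch of a reactive, memoryless agent driven by a cellular-automata
# lookup table. Every agent shares this identical table; the neighborhood
# encoding (four neighbors, occupied=1/empty=0) and entries are assumptions.

LOOKUP = {
    (0, 0, 0, 0): "move_random",   # isolated: keep searching
    (1, 0, 0, 0): "move_toward",   # one neighbor: approach the lattice
    (1, 1, 0, 0): "stay",          # two neighbors: hold lattice position
    (1, 0, 1, 0): "stay",
}

def act(neighborhood):
    """Map the sensed neighborhood to an action; unknown states default to random motion."""
    return LOOKUP.get(tuple(neighborhood), "move_random")

print(act([1, 1, 0, 0]))  # stay
print(act([0, 0, 0, 0]))  # move_random
```

The table-driven form makes the controller cheap to evaluate and easy to evolve: a genetic search only has to fill in table entries, not tune continuous weights.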

    Application of an Artificial Neural Tissue Controller to Multirobot Lunar ISRU Operations

    No full text
    Abstract. Automation of mining and resource-utilization processes on the Moon with teams of autonomous robots holds considerable promise for establishing a lunar base. We present the Artificial Neural Tissue (ANT) architecture as a control system for autonomous multirobot tasks. The ANT approach requires much less human supervision and pre-programmed human expertise than previous techniques: only a single global fitness function and a set of allowable basis behaviors need be specified. An evolutionary (Darwinian) selection process is used to train controllers for the task at hand in simulation, and the result is verified on hardware. This process leads to the emergence of novel functionality through the task decomposition of mission goals. ANT-based controllers are shown to exhibit self-organization, employ stigmergy (communication mediated through the environment), and make use of templates (unlabeled environmental cues). With lunar in-situ resource utilization (ISRU) efforts in mind, ANT controllers have been tested on a multirobot resource-gathering task in which teams of robots with no explicit supervision successfully avoid obstacles, explore terrain, locate resource material, and collect it in a designated area by using a light beacon for reference and interpreting unlabeled perimeter markings.
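The template mechanism mentioned above (unlabeled environmental cues steering behavior without explicit communication) can be illustrated with a toy decision rule. The cue names and priorities below are assumptions loosely modeled on the beacon/perimeter setup, not the evolved controller's actual policy.

```python
# Illustrative sketch of template use: a robot picks its next basis behavior
# from raw environmental cues alone, with no robot-to-robot messages.
# Cue names and the rule ordering are hypothetical.

def next_behavior(sees_beacon, on_perimeter_marking, carrying_resource):
    """Choose a basis behavior from unlabeled cues (templates)."""
    if carrying_resource and on_perimeter_marking:
        return "deposit"          # perimeter template marks the dump zone
    if carrying_resource and sees_beacon:
        return "home_on_beacon"   # light beacon gives a shared reference
    if not carrying_resource:
        return "explore"          # search terrain for resource material
    return "wander"               # no usable cue: keep moving

print(next_behavior(True, False, True))   # home_on_beacon
print(next_behavior(False, True, True))   # deposit
```

The point of the sketch is that coordination emerges from shared cues in the environment rather than from any supervisor: every robot running the same cue-driven rule converges on the same dump site.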

    Evolving multirobot excavation controllers and choice of platforms using an artificial neural tissue paradigm

    No full text
    Autonomous robotic excavation has often been limited to a single robotic platform using a specified excavation vehicle. This paper presents a novel method for developing scalable controllers for use in multirobot scenarios that require neither human-defined operation scripts nor extensive modeling of the kinematics and dynamics of the excavation vehicles. Furthermore, the control system does not require specifying an excavation-vehicle type such as a bulldozer, front-loader, or bucket-wheel; it can instead evolve to select an appropriate choice of excavation vehicles to successfully complete a task. The 'artificial neural tissue' (ANT) architecture is used as a control system for autonomous multirobot excavation and clearing tasks. This control architecture combines a variable-topology neural-network structure with a coarse-coding strategy that permits specialized areas to develop in the tissue. Training is done in a low-fidelity grid-world simulation environment, where only a single global fitness function and a set of allowable basis behaviors need be specified. This approach is found to provide improved training performance over fixed-topology neural networks and can be easily ported onto different robot platforms. Aspects of the controller functionality have been tested using high-fidelity dynamics simulation and in hardware. The evolutionary training process discovers novel decentralized methods of cooperation employing aggregation behaviors (via synchronized movements). These aggregation behaviors are found to improve controller scalability (with increasing robot density) and to better handle robot interference (antagonism), which reduces the overall efficiency of the group.
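The evolutionary training loop that recurs in these abstracts, where only a global fitness function and a genome encoding need be supplied, can be sketched as a simple elitist search. The flat parameter-vector genome and the toy fitness below are stand-ins for scoring a controller in the grid-world simulation.

```python
import random

# Hedged sketch of the Darwinian training loop: rank a population by a single
# global fitness function, keep an elite, and refill with mutated copies.
# Genome encoding, toy fitness, and hyperparameters are all assumptions.

random.seed(0)
TARGET = [0.5] * 8  # illustrative optimum for the toy fitness

def fitness(genome):
    """Higher is better; a real run would score the controller in simulation."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=20, genome_len=8, generations=50, elite=5):
    pop = [[random.uniform(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]                      # elitist selection
        children = [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                    for _ in range(pop_size - elite)]
        pop = parents + children                   # mutated copies refill the pool
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 3))  # near 0: the population converges on TARGET
```

Because the designer specifies only the fitness function, the search is free to discover unscripted strategies, which is how behaviors such as the synchronized aggregation described above can emerge without being programmed in.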